
    The end of vagueness: technological epistemicism, surveillance capitalism, and explainable Artificial Intelligence

    Artificial Intelligence (AI) pervades humanity in 2022, and it is notoriously difficult to understand how certain aspects of it work. There is a movement—Explainable Artificial Intelligence (XAI)—to develop new methods for explaining the behaviours of AI systems. We aim to highlight one important philosophical significance of XAI—it has a role to play in the elimination of vagueness. To show this, consider that the use of AI in what has been labeled surveillance capitalism has resulted in humans quickly gaining the capability to identify and classify most of the occasions in which languages are used. We show that the knowability of this information is incompatible with what a certain theory of vagueness—epistemicism—says about vagueness. We argue that one way the epistemicist could respond to this threat is to claim that this process brought about the end of vagueness. However, we suggest an alternative interpretation, namely that epistemicism is false, but there is a weaker doctrine we dub technological epistemicism, which is the view that vagueness is due to ignorance of linguistic usage, but the ignorance can be overcome. The idea is that knowing more of the relevant data and how to process it enables us to know the semantic values of our words and sentences with higher confidence and precision. Finally, we argue that humans are probably not going to believe what future AI algorithms tell us about the sharp boundaries of our vague words unless the AI involved can be explained in terms understandable by humans. That is, if people are going to accept that AI can tell them about the sharp boundaries of the meanings of their words, then it is going to have to be XAI.

    Scorekeeping in a defective language game

    One common criticism of deflationism is that it does not have the resources to explain defective discourse (e.g., vagueness, referential indeterminacy, confusion, etc.). This problem is especially pressing for someone like Robert Brandom, who not only endorses deflationist accounts of truth, reference, and predication, but also refuses to use representational relations to explain content and propositional attitudes. To address this problem, I suggest that Brandom should explain defective discourse in terms of what it is to treat some portion of discourse as defective. To illustrate this strategy, I present an extension of his theory of content and use it to provide an explanation of confusion. The result is a theory of confusion based on Joseph Camp's recent treatment. The extension of Brandom's theory of content involves additions to his account of scorekeeping that allow members of a discursive practice to accept different standards of inferential correctness.